Face Generation

In this project, you'll use generative adversarial networks to generate new images of faces.

Get the Data

You'll be using two datasets in this project:

  • MNIST
  • CelebA

Since the CelebA dataset is complex and this is your first GAN project, we want you to test your neural network on MNIST before moving to CelebA. Running the GAN on MNIST will let you see how well your model trains much sooner.

If you're using FloydHub, set data_dir to "/input" and use the FloydHub data ID "R5KrjnANiKVhLWAkpXhNBe".

In [45]:
data_dir = './data'

# FloydHub - Use with data ID "R5KrjnANiKVhLWAkpXhNBe"
# (this assignment overrides the local path above; comment it out when running locally)
data_dir = '/input'


"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper

helper.download_extract('mnist', data_dir)
helper.download_extract('celeba', data_dir)
Found mnist Data
Found celeba Data

Explore the Data

MNIST

As you're aware, the MNIST dataset contains images of handwritten digits. You can change how many examples are displayed by changing show_n_images.

In [46]:
show_n_images = 25

"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
%matplotlib inline
import os
from glob import glob
from matplotlib import pyplot

mnist_images = helper.get_batch(glob(os.path.join(data_dir, 'mnist/*.jpg'))[:show_n_images], 28, 28, 'L')
pyplot.imshow(helper.images_square_grid(mnist_images, 'L'), cmap='gray')
Out[46]:
<matplotlib.image.AxesImage at 0x7f7ab87ebef0>

CelebA

The CelebFaces Attributes Dataset (CelebA) contains over 200,000 celebrity images with annotations. Since you're going to be generating faces, you won't need the annotations. You can change how many examples are displayed by changing show_n_images.

In [47]:
show_n_images = 25

"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
celeba_images = helper.get_batch(glob(os.path.join(data_dir, 'img_align_celeba/*.jpg'))[:show_n_images], 28, 28, 'RGB')
pyplot.imshow(helper.images_square_grid(celeba_images, 'RGB'))
Out[47]:
<matplotlib.image.AxesImage at 0x7f7ad06a5320>

Preprocess the Data

Since the project's main focus is on building the GAN, we'll preprocess the data for you. The MNIST and CelebA images will be 28x28, with pixel values in the range of -0.5 to 0.5. The CelebA images will be cropped to remove the parts that don't include a face, then resized down to 28x28.

The MNIST images are black-and-white with a single color channel, while the CelebA images have 3 color channels (RGB).
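As a rough sketch of what that value range implies for training (the helper code is the source of truth; the batch below is hypothetical), preprocessed values in [-0.5, 0.5] only need a factor of 2 to match the [-1, 1] range of a tanh generator output:

```python
import numpy as np

def rescale_for_tanh(batch):
    """Map preprocessed pixel values in [-0.5, 0.5] to [-1, 1],
    the output range of a tanh-activated generator."""
    return batch * 2.0

# Hypothetical mini-batch shaped like preprocessed CelebA data
batch = np.random.uniform(-0.5, 0.5, size=(64, 28, 28, 3)).astype(np.float32)
scaled = rescale_for_tanh(batch)
assert scaled.min() >= -1.0 and scaled.max() <= 1.0
```

Matching the real images to the generator's output range matters because the discriminator sees both; a mismatch gives it a trivial cue to tell them apart.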

Build the Neural Network

You'll build the components of a GAN by implementing the following functions:

  • model_inputs
  • discriminator
  • generator
  • model_loss
  • model_opt
  • train

Check the Version of TensorFlow and Access to GPU

This will check that you have the correct version of TensorFlow and access to a GPU.

In [48]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from distutils.version import LooseVersion
import warnings
import tensorflow as tf

# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer.  You are using {}'.format(tf.__version__)
print('TensorFlow Version: {}'.format(tf.__version__))

# Check for a GPU
if not tf.test.gpu_device_name():
    warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
    print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
TensorFlow Version: 1.0.0
Default GPU Device: /gpu:0

Input

Implement the model_inputs function to create TF Placeholders for the Neural Network. It should create the following placeholders:

  • Real input images placeholder with rank 4 using image_width, image_height, and image_channels.
  • Z input placeholder with rank 2 using z_dim.
  • Learning rate placeholder with rank 0.

Return the placeholders in the following tuple: (tensor of real input images, tensor of z data, learning rate).

In [49]:
import problem_unittests as tests

def model_inputs(image_width, image_height, image_channels, z_dim):
    """
    Create the model inputs
    :param image_width: The input image width
    :param image_height: The input image height
    :param image_channels: The number of image channels
    :param z_dim: The dimension of Z
    :return: Tuple of (tensor of real input images, tensor of z data, learning rate)
    """
    # TODO: Implement Function
    input_real = tf.placeholder(tf.float32, shape=(None, image_width, image_height, image_channels), name="input_real")
    input_z = tf.placeholder(tf.float32, shape=(None, z_dim), name="input_z")
    learning_rate = tf.placeholder(tf.float32, name="learning_rate")
    
    return input_real, input_z, learning_rate


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_model_inputs(model_inputs)
Tests Passed

Discriminator

Implement discriminator to create a discriminator neural network that discriminates on images. This function should be able to reuse the variables in the neural network. Use tf.variable_scope with a scope name of "discriminator" to allow the variables to be reused. The function should return a tuple of (tensor output of the discriminator, tensor logits of the discriminator).

In [141]:
def discriminator(images, reuse=False):
    """
    Create the discriminator network
    :param image: Tensor of input image(s)
    :param reuse: Boolean if the weights should be reused
    :return: Tuple of (tensor output of the discriminator, tensor logits of the discriminator)
    """
    # TODO: Implement Function
    with tf.variable_scope("discriminator", reuse=reuse):
        alpha = 0.01  # leaky ReLU slope
        # Input layer is 28x28xN
        layer = tf.layers.conv2d(images, 64, 5, strides=2, padding='same',
                                 kernel_initializer=tf.contrib.layers.xavier_initializer())
        layer = tf.maximum(alpha * layer, layer)
        # 14x14x64
        layer = tf.layers.conv2d(layer, 128, 5, strides=2, padding='same',
                                 kernel_initializer=tf.contrib.layers.xavier_initializer())
        layer = tf.layers.batch_normalization(layer, training=True)
        layer = tf.maximum(alpha * layer, layer)
        # 7x7x128

        layer = tf.layers.conv2d(layer, 256, 5, strides=2, padding='same',
                                 kernel_initializer=tf.contrib.layers.xavier_initializer())
        layer = tf.layers.batch_normalization(layer, training=True)
        layer = tf.maximum(alpha * layer, layer)
        # 4x4x256

        # Flatten and classify
        flat = tf.contrib.layers.flatten(layer)
        logits = tf.layers.dense(flat, 1)
        out = tf.sigmoid(logits)

        return out, logits


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_discriminator(discriminator, tf)
Tests Passed

Generator

Implement generator to generate an image using z. This function should be able to reuse the variables in the neural network. Use tf.variable_scope with a scope name of "generator" to allow the variables to be reused. The function should return the generated 28 x 28 x out_channel_dim images.

In [162]:
def generator(z, out_channel_dim, is_train=True):
    """
    Create the generator network
    :param z: Input z
    :param out_channel_dim: The number of channels in the output image
    :param is_train: Boolean if generator is being used for training
    :return: The tensor output of the generator
    """
    # TODO: Implement Function
    with tf.variable_scope("generator", reuse=not is_train):
        alpha = 0.01  # leaky ReLU slope
        first_layer_neurons = 1024
        # First fully connected layer, reshaped to start the convolutional stack
        layer = tf.layers.dense(z, 7 * 7 * first_layer_neurons)
        layer = tf.reshape(layer, (-1, 7, 7, first_layer_neurons))
        layer = tf.layers.batch_normalization(layer, training=is_train)
        layer = tf.maximum(alpha * layer, layer)
        # 7x7x1024

        layer = tf.layers.conv2d_transpose(layer, 512, 2, strides=1, padding='same',
                                           kernel_initializer=tf.contrib.layers.xavier_initializer())
        layer = tf.layers.batch_normalization(layer, training=is_train)
        layer = tf.maximum(alpha * layer, layer)
        layer = tf.nn.dropout(layer, keep_prob=0.5)
        # 7x7x512

        layer = tf.layers.conv2d_transpose(layer, 256, 2, strides=2, padding='same',
                                           kernel_initializer=tf.contrib.layers.xavier_initializer())
        layer = tf.layers.batch_normalization(layer, training=is_train)
        layer = tf.maximum(alpha * layer, layer)
        layer = tf.nn.dropout(layer, keep_prob=0.5)
        # 14x14x256

        layer = tf.layers.conv2d_transpose(layer, 128, 2, strides=1, padding='same',
                                           kernel_initializer=tf.contrib.layers.xavier_initializer())
        layer = tf.layers.batch_normalization(layer, training=is_train)
        layer = tf.maximum(alpha * layer, layer)
        # 14x14x128

        # Output layer
        logits = tf.layers.conv2d_transpose(layer, out_channel_dim, 5, strides=2, padding='same',
                                            kernel_initializer=tf.contrib.layers.xavier_initializer())
        # 28x28xout_channel_dim
        out = tf.tanh(logits)

        return out


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_generator(generator, tf)
Tests Passed

Loss

Implement model_loss to build the GANs for training and calculate the loss. The function should return a tuple of (discriminator loss, generator loss). Use the following functions you implemented:

  • discriminator(images, reuse=False)
  • generator(z, out_channel_dim, is_train=True)
In [163]:
def model_loss(input_real, input_z, out_channel_dim):
    """
    Get the loss for the discriminator and generator
    :param input_real: Images from the real dataset
    :param input_z: Z input
    :param out_channel_dim: The number of channels in the output image
    :return: A tuple of (discriminator loss, generator loss)
    """
    # TODO: Implement Function
    
    g_model = generator(input_z, out_channel_dim, is_train=True)
    d_model_real, d_logits_real = discriminator(input_real)
    d_model_fake, d_logits_fake = discriminator(g_model, reuse=True)
    
    d_loss_real = tf.reduce_mean(
        tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real, labels=tf.ones_like(d_model_real)))
    d_loss_fake = tf.reduce_mean(
        tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.zeros_like(d_model_fake)))
    g_loss = tf.reduce_mean(
        tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.ones_like(d_model_fake)))
    
    return d_loss_real + d_loss_fake, g_loss


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_model_loss(model_loss)
Tests Passed

Optimization

Implement model_opt to create the optimization operations for the GANs. Use tf.trainable_variables to get all the trainable variables. Filter the variables with names that are in the discriminator and generator scope names. The function should return a tuple of (discriminator training operation, generator training operation).

In [164]:
def model_opt(d_loss, g_loss, learning_rate, beta1):
    """
    Get optimization operations
    :param d_loss: Discriminator loss Tensor
    :param g_loss: Generator loss Tensor
    :param learning_rate: Learning Rate Placeholder
    :param beta1: The exponential decay rate for the 1st moment in the optimizer
    :return: A tuple of (discriminator training operation, generator training operation)
    """
    # TODO: Implement Function
    t_vars = tf.trainable_variables()
    
    g_vars = [var for var in t_vars if var.name.startswith('generator')]
    d_vars = [var for var in t_vars if var.name.startswith('discriminator')]
    
    
    with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):
        d_train_opt = tf.train.AdamOptimizer(learning_rate, beta1).minimize(d_loss, var_list=d_vars)
        g_train_opt = tf.train.AdamOptimizer(learning_rate, beta1).minimize(g_loss, var_list=g_vars)
    
    return d_train_opt, g_train_opt


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_model_opt(model_opt, tf)
Tests Passed

Neural Network Training

Show Output

Use this function to show the current output of the generator during training. It will help you determine how well the GAN is training.

In [165]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np

def show_generator_output(sess, n_images, input_z, out_channel_dim, image_mode):
    """
    Show example output for the generator
    :param sess: TensorFlow session
    :param n_images: Number of Images to display
    :param input_z: Input Z Tensor
    :param out_channel_dim: The number of channels in the output image
    :param image_mode: The mode to use for images ("RGB" or "L")
    """
    cmap = None if image_mode == 'RGB' else 'gray'
    z_dim = input_z.get_shape().as_list()[-1]
    example_z = np.random.uniform(-1, 1, size=[n_images, z_dim])

    samples = sess.run(
        generator(input_z, out_channel_dim, False),
        feed_dict={input_z: example_z})

    images_grid = helper.images_square_grid(samples, image_mode)
    pyplot.imshow(images_grid, cmap=cmap)
    pyplot.show()

Train

Implement train to build and train the GANs. Use the following functions you implemented:

  • model_inputs(image_width, image_height, image_channels, z_dim)
  • model_loss(input_real, input_z, out_channel_dim)
  • model_opt(d_loss, g_loss, learning_rate, beta1)

Use show_generator_output to show generator output while you train. Running show_generator_output for every batch will drastically increase training time and the size of the notebook. It's recommended to print the generator output every 100 batches.
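The cadence described above can be sketched as a simple modulo check (plain Python; should_show_output is an illustrative helper, not part of the project files):

```python
def should_show_output(batch_i, every=100):
    """Return True when generator samples should be displayed.
    Skips batch 0 so the untrained generator isn't plotted."""
    return batch_i != 0 and batch_i % every == 0

shown = [i for i in range(301) if should_show_output(i)]
# shown == [100, 200, 300]
```

The same pattern works for printing losses at a finer interval (e.g. every 20 batches) while keeping image plots at the coarser one.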

In [166]:
import time

def train(epoch_count, batch_size, z_dim, learning_rate, beta1, get_batches, data_shape, data_image_mode):
    """
    Train the GAN
    :param epoch_count: Number of epochs
    :param batch_size: Batch Size
    :param z_dim: Z dimension
    :param learning_rate: Learning Rate
    :param beta1: The exponential decay rate for the 1st moment in the optimizer
    :param get_batches: Function to get batches
    :param data_shape: Shape of the data
    :param data_image_mode: The image mode to use for images ("RGB" or "L")
    """
    # Build Model
    out_channel_dim = data_shape[-1]
    image_width = image_height = data_shape[1]

    input_real, input_z, lr = model_inputs(image_width, image_height, out_channel_dim, z_dim)
    d_loss, g_loss = model_loss(input_real, input_z, out_channel_dim)
    d_train_opt, g_train_opt = model_opt(d_loss, g_loss, lr, beta1)

    start_time = time.time()
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        for epoch_i in range(epoch_count):
            for i, batch_images in enumerate(get_batches(batch_size)):
                # get_batches yields values in [-0.5, 0.5]; scale to [-1, 1]
                # to match the tanh output range of the generator
                batch_images *= 2
                batch_z = np.random.uniform(-1, 1, size=(batch_size, z_dim))
                _, discriminator_loss = sess.run(
                    [d_train_opt, d_loss],
                    {input_z: batch_z, input_real: batch_images, lr: learning_rate})
                _, generator_loss = sess.run(
                    [g_train_opt, g_loss],
                    {input_z: batch_z, input_real: batch_images, lr: learning_rate})

                if i % 20 == 0:
                    print("                                                               \r", end="")
                    print("{3}:{2} gener. loss: {0:.5} descr. loss {1:.5}\r".format(
                        generator_loss, discriminator_loss, i, epoch_i), end="")
                if i != 0 and i % 100 == 0:
                    show_generator_output(sess, batch_images.shape[0], input_z, out_channel_dim, data_image_mode)

    print("took {0}".format(time.time() - start_time))

MNIST

Test your GAN architecture on MNIST. After 2 epochs, the GAN should be able to generate images that look like handwritten digits. Make sure the generator's loss is lower than the discriminator's loss, or close to 0.

In [167]:
batch_size = 64
z_dim = 100
learning_rate = 0.001
beta1 = 0.5


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
epochs = 2

mnist_dataset = helper.Dataset('mnist', glob(os.path.join(data_dir, 'mnist/*.jpg')))
with tf.Graph().as_default():
    train(epochs, batch_size, z_dim, learning_rate, beta1, mnist_dataset.get_batches,
          mnist_dataset.shape, mnist_dataset.image_mode)
0:100 gener. loss: 4.3661 descr. loss 1.602                    
0:200 gener. loss: 1.8258 descr. loss 0.76167                  
0:300 gener. loss: 1.3856 descr. loss 0.81641                  
0:400 gener. loss: 3.1225 descr. loss 1.0832                   
0:500 gener. loss: 1.9398 descr. loss 0.73658                  
0:600 gener. loss: 2.0341 descr. loss 0.67063                  
0:700 gener. loss: 2.0233 descr. loss 0.73598                  
0:800 gener. loss: 2.1685 descr. loss 0.85299                  
0:900 gener. loss: 1.2494 descr. loss 0.82946                  
1:100 gener. loss: 2.2024 descr. loss 0.6722                   
1:200 gener. loss: 1.7816 descr. loss 0.69354                  
1:300 gener. loss: 2.5253 descr. loss 0.50733                  
1:400 gener. loss: 1.2005 descr. loss 0.60589                  
1:500 gener. loss: 2.2325 descr. loss 0.72146                  
1:600 gener. loss: 2.9374 descr. loss 0.59959                  
1:700 gener. loss: 2.9751 descr. loss 0.48058                  
1:800 gener. loss: 3.0121 descr. loss 1.3152                   
1:900 gener. loss: 2.0808 descr. loss 0.74157                  
took 1071.4973802566528

CelebA

Run your GAN on CelebA. One epoch takes around 20 minutes on an average GPU. You can run the whole epoch or stop when it starts to generate realistic faces.

In [168]:
batch_size = 16
# z_dim, learning_rate, and beta1 keep the values set for the MNIST run above


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
epochs = 1

celeba_dataset = helper.Dataset('celeba', glob(os.path.join(data_dir, 'img_align_celeba/*.jpg')))
with tf.Graph().as_default():
    train(epochs, batch_size, z_dim, learning_rate, beta1, celeba_dataset.get_batches,
          celeba_dataset.shape, celeba_dataset.image_mode)
0:100 gener. loss: 1.7677 descr. loss 1.3776                   
0:200 gener. loss: 3.4823 descr. loss 0.7785                   
0:300 gener. loss: 6.5141 descr. loss 1.4921                   
0:400 gener. loss: 3.1916 descr. loss 0.76651                  
0:500 gener. loss: 3.0367 descr. loss 0.35421                  
0:600 gener. loss: 1.6651 descr. loss 2.236                    
0:700 gener. loss: 5.1752 descr. loss 0.97894                  
0:800 gener. loss: 2.1643 descr. loss 0.86016                  
0:900 gener. loss: 2.1559 descr. loss 1.3                      
0:1000 gener. loss: 1.5647 descr. loss 1.1603                  
0:1100 gener. loss: 1.4145 descr. loss 1.3794                  
0:1200 gener. loss: 1.4055 descr. loss 0.79492                 
0:1300 gener. loss: 1.0313 descr. loss 1.3626                  
0:1400 gener. loss: 1.4725 descr. loss 1.1188                  
0:1500 gener. loss: 2.2153 descr. loss 0.78295                 
0:1600 gener. loss: 2.2853 descr. loss 0.75633                 
0:1700 gener. loss: 1.1841 descr. loss 0.89773                 
0:1800 gener. loss: 2.5545 descr. loss 1.2694                  
0:1900 gener. loss: 1.1368 descr. loss 1.1539                  
0:2000 gener. loss: 2.5789 descr. loss 1.0354                  
0:2100 gener. loss: 1.7414 descr. loss 0.89892                 
0:2200 gener. loss: 2.4976 descr. loss 1.1862                  
0:2300 gener. loss: 1.832 descr. loss 0.84797                  
0:2400 gener. loss: 1.4659 descr. loss 1.2034                  
0:2500 gener. loss: 2.4645 descr. loss 0.89786                 
0:2600 gener. loss: 1.725 descr. loss 1.2325                   
0:2700 gener. loss: 1.6162 descr. loss 1.1471                  
0:2800 gener. loss: 1.524 descr. loss 0.94582                  
0:2900 gener. loss: 2.0522 descr. loss 1.2018                  
0:3000 gener. loss: 1.1231 descr. loss 1.464                   
0:3100 gener. loss: 2.8536 descr. loss 0.96665                 
0:3200 gener. loss: 2.4519 descr. loss 0.886                   
0:3300 gener. loss: 1.4279 descr. loss 1.1356                  
0:3400 gener. loss: 1.515 descr. loss 0.96675                  
0:3500 gener. loss: 2.6604 descr. loss 1.5872                  
0:3600 gener. loss: 2.3553 descr. loss 0.94466                 
0:3700 gener. loss: 2.5346 descr. loss 0.7172                  
0:3800 gener. loss: 2.4865 descr. loss 0.89535                 
0:3900 gener. loss: 1.8536 descr. loss 0.90594                 
0:4000 gener. loss: 2.2948 descr. loss 0.6592                  
0:4100 gener. loss: 1.9312 descr. loss 0.81313                 
0:4200 gener. loss: 1.7852 descr. loss 1.2306                  
0:4300 gener. loss: 1.3403 descr. loss 0.99885                 
0:4400 gener. loss: 2.3577 descr. loss 0.8981                  
0:4500 gener. loss: 2.2802 descr. loss 1.0654                  
0:4600 gener. loss: 2.5521 descr. loss 1.3704                  
0:4700 gener. loss: 2.5508 descr. loss 1.2553                  
0:4800 gener. loss: 2.1636 descr. loss 1.2616                  
0:4900 gener. loss: 1.184 descr. loss 1.737                    
0:5000 gener. loss: 1.4934 descr. loss 0.87385                 
0:5100 gener. loss: 1.5809 descr. loss 1.0957                  
0:5200 gener. loss: 1.5519 descr. loss 1.0062                  
0:5300 gener. loss: 0.77556 descr. loss 1.6092                 
0:5400 gener. loss: 1.3897 descr. loss 1.1928                  
0:5500 gener. loss: 0.98681 descr. loss 0.87876                
0:5600 gener. loss: 2.2027 descr. loss 0.77699                 
0:5700 gener. loss: 2.0699 descr. loss 1.0657                  
0:5800 gener. loss: 2.458 descr. loss 0.54175                  
0:5900 gener. loss: 1.8418 descr. loss 1.0259                  
0:6000 gener. loss: 2.1112 descr. loss 1.571                   
0:6100 gener. loss: 1.0906 descr. loss 1.4253                  
0:6200 gener. loss: 1.7433 descr. loss 1.1657                  
0:6300 gener. loss: 2.7641 descr. loss 0.765                   
0:6400 gener. loss: 1.2175 descr. loss 1.5868                  
0:6500 gener. loss: 1.2961 descr. loss 0.82317                 
0:6600 gener. loss: 2.1663 descr. loss 1.0074                  
0:6700 gener. loss: 1.9165 descr. loss 0.73951                 
0:6800 gener. loss: 1.7813 descr. loss 1.0737                  
0:6900 gener. loss: 1.9694 descr. loss 0.97292                 
0:7000 gener. loss: 1.5334 descr. loss 0.90443                 
0:7100 gener. loss: 1.7204 descr. loss 1.3296                  
0:7200 gener. loss: 2.0678 descr. loss 0.97621                 
0:7300 gener. loss: 1.4034 descr. loss 0.84919                 
0:7400 gener. loss: 2.9117 descr. loss 0.74854                 
0:7500 gener. loss: 1.7181 descr. loss 0.76601                 
0:7600 gener. loss: 1.6939 descr. loss 1.6126                  
0:7700 gener. loss: 2.9702 descr. loss 0.45883                 
0:7800 gener. loss: 2.4608 descr. loss 0.92918                 
0:7900 gener. loss: 2.224 descr. loss 0.82599                  
0:8000 gener. loss: 1.4324 descr. loss 0.88434                 
0:8100 gener. loss: 2.8106 descr. loss 0.9986                  
0:8200 gener. loss: 1.9199 descr. loss 1.0636                  
0:8300 gener. loss: 3.1072 descr. loss 0.66617                 
0:8400 gener. loss: 2.1491 descr. loss 0.9333                  
0:8500 gener. loss: 2.3323 descr. loss 1.0854                  
0:8600 gener. loss: 2.4781 descr. loss 0.84215                 
0:8700 gener. loss: 1.354 descr. loss 1.4662                   
0:8800 gener. loss: 2.3546 descr. loss 0.90567                 
0:8900 gener. loss: 2.6121 descr. loss 0.55827                 
0:9000 gener. loss: 2.0152 descr. loss 0.93611                 
0:9100 gener. loss: 2.4841 descr. loss 0.80528                 
0:9200 gener. loss: 3.5052 descr. loss 0.77168                 
0:9300 gener. loss: 2.1981 descr. loss 0.96909                 
0:9400 gener. loss: 2.0481 descr. loss 0.94492                 
0:9500 gener. loss: 3.1032 descr. loss 0.60264                 
0:9600 gener. loss: 1.3761 descr. loss 0.95637                 
0:9700 gener. loss: 2.8548 descr. loss 1.3663                  
0:9800 gener. loss: 2.1587 descr. loss 1.2427                  
0:9900 gener. loss: 2.8219 descr. loss 0.75604                 
0:10000 gener. loss: 3.1066 descr. loss 0.81354                
0:10100 gener. loss: 1.5033 descr. loss 0.66996                
0:10200 gener. loss: 1.9493 descr. loss 1.2685                 
0:10300 gener. loss: 2.1413 descr. loss 1.1059                 
0:10400 gener. loss: 2.5109 descr. loss 0.35213                
0:10500 gener. loss: 1.2297 descr. loss 1.0047                 
0:10600 gener. loss: 2.3309 descr. loss 0.62919                
0:10700 gener. loss: 2.3513 descr. loss 0.95728                
0:10800 gener. loss: 1.9943 descr. loss 0.53778                
0:10900 gener. loss: 2.5231 descr. loss 0.74083                
0:11000 gener. loss: 2.0222 descr. loss 0.54431                
0:11100 gener. loss: 2.7516 descr. loss 0.44412                
0:11200 gener. loss: 3.0427 descr. loss 0.58542                
0:11300 gener. loss: 2.2071 descr. loss 0.76765                
0:11400 gener. loss: 2.5291 descr. loss 0.92613                
0:11500 gener. loss: 2.3776 descr. loss 0.86151                
0:11600 gener. loss: 2.5092 descr. loss 0.6675                 
0:11700 gener. loss: 2.3409 descr. loss 0.74365                
0:11800 gener. loss: 2.1053 descr. loss 1.6409                 
0:11900 gener. loss: 2.9929 descr. loss 0.64293                
0:12000 gener. loss: 3.1868 descr. loss 1.0116                 
0:12100 gener. loss: 2.7631 descr. loss 0.77084                
0:12200 gener. loss: 2.254 descr. loss 1.2429                  
0:12300 gener. loss: 2.1391 descr. loss 0.99185                
0:12400 gener. loss: 3.3662 descr. loss 0.60824                
0:12500 gener. loss: 1.7481 descr. loss 1.0365                 
0:12600 gener. loss: 2.1333 descr. loss 0.60246                
took 2870.7742726802826

Submitting This Project

When submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as "dlnd_face_generation.ipynb" and save it as an HTML file under "File" -> "Download as". Include the "helper.py" and "problem_unittests.py" files in your submission.
